140 research outputs found

    The structural correlates of statistical information processing during speech perception

    The processing of continuous and complex auditory signals such as speech relies on the ability to use statistical cues (e.g. transitional probabilities). In this study, participants heard short auditory sequences composed either of Italian syllables or bird songs and completed a regularity-rating task. Behaviorally, participants were better at differentiating between levels of regularity in the syllable sequences than in the bird song sequences. Inter-individual differences in sensitivity to regularity for speech stimuli were correlated with variations in surface-based cortical thickness (CT). These correlations were found in several cortical areas, including regions previously associated with statistical structure processing (e.g. bilateral superior temporal sulcus, left precentral sulcus and inferior frontal gyrus), as well as other regions (e.g. left insula, bilateral superior frontal gyrus/sulcus and supramarginal gyrus). In all regions the correlation was positive, suggesting that thicker cortex is related to higher sensitivity to variations in the statistical structure of auditory sequences. Overall, these results suggest that inter-individual differences in CT within a distributed network of cortical regions involved in statistical structure processing, attention and memory are predictive of the ability to detect statistical structure in auditory speech sequences.
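
    As a concrete illustration of the statistical cue the abstract refers to, the sketch below estimates first-order transitional probabilities from a symbol sequence. It is not the authors' analysis; the syllables and the toy sequence are hypothetical.

```python
from collections import Counter, defaultdict

def transitional_probabilities(sequence):
    """Estimate P(next | current) from bigram counts in a sequence of symbols."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    probs = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        probs[a][b] = n / first_counts[a]
    return dict(probs)

# Hypothetical, highly regular syllable stream: "tu" is always followed by "pi".
regular = ["tu", "pi", "ro", "tu", "pi", "ro", "tu", "pi", "ro"]
print(transitional_probabilities(regular))
# {'tu': {'pi': 1.0}, 'pi': {'ro': 1.0}, 'ro': {'tu': 1.0}}
```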

    How to Create and Use Binocular Rivalry

    Each of our eyes normally sees a slightly different image of the world around us. The brain can combine these two images into a single coherent representation. However, when the eyes are presented with images that are sufficiently different from each other, an interesting thing happens: Rather than fusing the two images into a combined conscious percept, what transpires is a pattern of perceptual alternations in which one image dominates awareness while the other is suppressed; dominance alternates between the two images, typically every few seconds. This perceptual phenomenon is known as binocular rivalry. Binocular rivalry is considered useful for studying perceptual selection and awareness in both human and animal models, because unchanging visual input to each eye leads to alternations in visual awareness and perception. To create a binocular rivalry stimulus, all that is necessary is to present each eye with a different image at the same perceived location. There are several ways of doing this, but newcomers to the field are often unsure which method would best suit their specific needs. The purpose of this article is to describe a number of inexpensive and straightforward ways to create and use binocular rivalry. We detail methods that do not require expensive specialized equipment and describe each method's advantages and disadvantages. The methods described include the use of red-blue goggles, mirror stereoscopes, and prism goggles.
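
    For readers wanting a quick starting point, here is a minimal sketch of the red-blue goggle approach mentioned above: two orthogonal gratings are placed in separate color channels of one image, so that each filtered eye receives a different pattern at the same location. The image size, spatial frequency, and file name are illustrative choices, not values from the article.

```python
import numpy as np
from PIL import Image

size = 256
y, x = np.mgrid[0:size, 0:size]
cycles = 8  # spatial frequency in cycles per image (illustrative value)

# Grating for the left (red-filtered) eye is vertical; for the right (blue-filtered) eye, horizontal.
left = (0.5 + 0.5 * np.sin(2 * np.pi * cycles * x / size)) * 255
right = (0.5 + 0.5 * np.sin(2 * np.pi * cycles * y / size)) * 255

rgb = np.zeros((size, size, 3), dtype=np.uint8)
rgb[..., 0] = left.astype(np.uint8)   # red channel -> left eye
rgb[..., 2] = right.astype(np.uint8)  # blue channel -> right eye

Image.fromarray(rgb).save("rivalry_anaglyph.png")
```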

    They saw a movie: Long-term memory for an extended audiovisual narrative.

    We measured long-term memory for a narrative film. During the study session, participants watched a 27-min movie episode, without instructions to remember it. During the test session, administered at a delay ranging from 3 h to 9 mo after the study session, long-term memory for the movie was probed using a computerized questionnaire that assessed cued recall, recognition, and metamemory of movie events sampled ∼20 sec apart. The performance of each group of participants was measured at a single time point only. The participants remembered many events in the movie even months after watching it. Analysis of performance, using multiple measures, indicates differences between recent (weeks) and remote (months) memory. While high-confidence recognition performance was a reliable index of memory throughout the measured time span, cued recall accuracy was higher for relatively recent information. Analysis of different content elements in the movie revealed differential memory performance profiles according to time since encoding. We also used the data to propose lower limits on the capacity of long-term memory. This experimental paradigm is useful not only for the analysis of behavioral performance that results from encoding episodes in a continuous, real-life-like situation, but is also suitable for studying brain substrates and processes of real-life memory using functional brain imaging.
    Experimental protocols that probe brain correlates of episodic memory formation commonly use paradigms in which memoranda are presented as individual items devoid of the continuous context encountered outside the laboratory setting. Movies are capable of simulating aspects of real-life experiences by fusing multimodal perception with emotional and cognitive overtones. Here, we describe the use of a 27-min narrative movie to investigate long-term cued recall and recognition as well as metamemory judgments. We measured memory performance of several groups of participants, each at a different delay ranging from 3 h to 9 mo after watching the movie, by probing memory for events occurring in the movie ∼20 sec apart. Participants remembered many events in the movie, which they had seen only once without prior instructions to remember it, even months after watching it. We dissected multiple facets of memory and metamemory as a function of time and type of occurrence in the movie. Our analysis also suggests lower limits on the capacity of long-term memory for a real-life-like situation.
    Results
    Memory of the movie persists for months. The first set of experimental groups watched the movie and answered the questionnaire.
    Heuristic subdivisions in long-term memory. Pairwise comparisons among all groups, corrected for multiple comparisons, reveal superior performance of the 3-h, 1-wk, and 3-wk groups compared with the 3-mo and 9-mo groups.
    Recall attempts over time. Participants made fewer attempts at recall as more time passed between study and test sessions.
    Memory confidence over time. The proportions of all answers made at different levels of confidence were compared across time-interval groups, revealing a surprisingly stable proportion of high-confidence recognition (HCR) answers over time and a significant decrease in the use of recall over time. In contrast, the proportion of low-confidence recognition (LCR) and guess answers significantly increased over time. One might expect a smaller proportion of correct answers as confidence declines with the passage of time.
    To test this assumption, the proportion of correct answers was calculated from all answers made at each level of confidence. Furthermore, the main group effect suggests that the distribution of correct answers between the four possible levels of confidence is uneven among time-interval groups. The interaction between these two main effects, time and confidence level, was further explored by performing pairwise contrasts, revealing performance differences that further support the differentiation of ST and LT groups. The findings described above indicate that confidence measures of memory are time-sensitive: more high-confidence answers (using recall and HCR) are given after short time durations, while more lower-confidence answers (LCR and guessing) are given after longer time durations. We further sought to characterize the temporal dynamics of metamemory measures using analysis of answers by confidence level, and find that the proportion of overall use of recall declines over time.
    Memory density. The availability of memory performance scores that sample events every 20 sec in a 27-min episode renders it tempting to attempt tapping into long-term memory capacity per unit time, or density (Dudai 1997). Any such attempt is bound to yield only rough estimates, if only because formal units that might be used for quantifying the stored memory, such as bits, are not readily applicable here. Using the algorithm specified in the Materials and Methods, we estimate that 56% of the information in the movie is remembered after 3 h, while 53%, 39%, 25%, and 19% are remembered after 1 wk, 3 wk, 3 mo, and 9 mo, respectively. Equating, for the sake of calculation, a memory unit with a questionnaire item to be answered correctly and assuming independence among items, hence 77 items that could potentially be encoded over 27 min, this implies retrievability ranging from 1.6 items per minute of movie after 3 h to 1.4 and 1.0 items per 2 min after 3 and 9 mo, respectively (a rough worked example of this arithmetic appears at the end of this entry). The robustness of the assumptions involved and the relevance to previous estimates of long-term memory capacity are discussed below.
    Recall and recognition as a function of memory content. Six independent raters were asked to classify the questions into eight predetermined categories (while allowing overlap of categories). Questions were then grouped into nonoverlapping clusters based on 66% agreement among raters. Clusters included plot themes, social interactions, couple relationship, jokes, and minor details (13, 19, 6, 5, and 14 questions, respectively). "Social interactions" is a nonoverlapping broader category than "couple relationship," and did not include questions relating to interactions between the central character and his partner, which were included in the "couple relationship" cluster. Twenty of the 77 items in the memory questionnaire were not entered into analysis, as the agreement criterion was not reached. For each content cluster, we examined the proportion of recall and recognition attempts. We find that narrative elements and social interactions between characters are remembered best ("plot themes" questions elicited close to 100% accuracy in all ST groups). Questions about "jokes" and "details" elicit fewer recall attempts, and answer accuracy declines more rapidly and drastically than for all other question clusters.
    Better performance of the ST groups relative to the LT groups is established for all content clusters, while the comparison of time-dependent performance decline (i.e., curve slope) between content clusters approached significance only for correct recall performance. Significant main effects of content and time-interval group were found (recall attempts: content F(4,3224) = 62.27, P < 0.0001, time-interval group F(4,44.13) = 4.96, P < 0.002; correct recall: content F(4,874) = 8.02, P < 0.0001, time-interval group F(4,75.54) = 8.49, P < 0.0001). The interaction between these effects (the difference in performance between content clusters as a function of time-interval group, or comparison of slopes) was insignificant for both analyses, but approached significance for correct recall (recall attempts: F(16,3224) = 1.1, P < 0.35; correct recall: F(16,874) = 1.52, P < 0.09). In order to further explore the main effects, pairwise contrasts were performed separately according to time-interval groups. It is noteworthy that the division of LTM performance into ST and LT groups, introduced above on the basis of the analysis of correct and incorrect answers, is also supported by the analysis of content-based correct recall answers, and is partially supported by the analysis of content-based recall attempts. Subjects in the ST groups made significantly more recall attempts and more correct recall answers, per content cluster, than subjects in the LT groups. (Two exceptions are the "couple relationship" question cluster, which elicited a high proportion of correct recall answers throughout the tested span, and the proportion of recall attempts of the 3-wk group, which did not differ from both LT groups; Tables 1, 3.) While superior performance across content clusters is found for the ST groups, different content clusters elicited significantly different performance profiles over time. This is illustrated by observing that the starting point of maximal value for the forgetting curves varies between plots of different content clusters. We should qualify our finding that memory performance can be differentiated according to content elements, as it seems that time since encoding is also an important factor in rendering this differentiation evident. Further examination of the correct recall data, where the interaction between content and group effects was found to only approach significance, was performed using post hoc comparisons. When examining content effects within time-interval groups, significant differences between content clusters were found in all but the 3-h and 1-wk groups.
    Manipulation of movie and questionnaire did not diminish memory. We further tested whether certain manipulations of the movie during the study session, of the questionnaire in the test session, or both, would affect memory performance. A separate set of experimental groups all participated in the test session 3 wk after watching the movie, but were subjected to manipulated study or test material. Manipulations were either in the order of the content material (scrambling the order of scenes in the movie or of the questions in the test) or in perceptual attributes (eliminating color from the movie).
    One experimental group performed an interference protocol, in which subjects watched a different episode from the same sitcom at the beginning of the test session, and immediately afterward completed the original computerized questionnaire. No significant differences were found in performance (correct answers, collapsing confidence levels) between the manipulation protocols and the original 3-wk group (unaltered movie and test, n = 8, 78.79 ± 2.02% correct; scrambled version of the movie followed by regular test, n = 10, 79.87 ± 2.38% correct; unaltered movie followed by scrambled test, n = 6, 73.81 ± 2.97% correct; scrambled movie followed by scrambled test, n = 7, 73.1 ± 2.76% correct; interference protocol, n = 7, 73.28 ± 2.39% correct; regular movie in black and white and regular test using black-and-white frames as visual cues, n = 6, 70.35 ± 2.99% correct; F(5,40.25) = 2.24, P < 0.07). It is noteworthy that although some of the manipulations did show a trend toward decreased performance, scrambling the order of the scenes in the movie itself had no effect whatsoever. The potential implication of this finding for the encoding of the study material is discussed below.
    Discussion
    We describe a memory paradigm in which the study material is a 27-min narrative movie. This paradigm was intended to mimic aspects of "real-life" learning and memory under controlled experimental settings. We tested memory once, at delays ranging from 3 h to 9 mo after the study session. The test targeted events that occur in the movie every ∼20 sec. We found that details from the movie, which the participants watched only once without prior instruction to remember it, were remembered well over several months. Multiple performance measures indicate that long-term memory after hours-to-weeks is different from memory performance after several months. Recall answers, which we considered the highest-confidence answers, proved to be reliable measures of memory only for shorter durations, while HCR answers were highly reliable throughout the measured time span. Despite manipulations of the movie and test materials, meant to disrupt narrative construction during encoding and/or retrieval, memory performance was unaffected. One possible explanation is that subjects were still able to successfully reconstruct the narrative from the scrambled segments. We further demonstrate that information content significantly influences memorability over time (though some time is needed for this effect to become evident). Finally, we use the unique resolution of our memory questionnaire to suggest that memory capacity for real-life information might be higher than previously estimated. In clinical neurology and in cellular neurobiology, memory after 3 h is already long-term memory, and memory after 9 mo could be considered remote memory. Studies of human long-term memory and remote memory are abundant in the literature.
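
    The density estimate quoted in the Memory density paragraph above follows from simple arithmetic once each of the 77 questionnaire items is treated as one memory unit, which is the simplifying assumption stated in the text. A minimal sketch reproducing those numbers:

```python
# Rough reproduction of the density estimate described above: 77 potential
# items encoded over a 27-min movie, with the reported proportions retained
# at each delay. The percentages are taken from the text; treating one
# questionnaire item as one memory unit is the stated simplifying assumption.
n_items, minutes = 77, 27
retained = {"3 h": 0.56, "1 wk": 0.53, "3 wk": 0.39, "3 mo": 0.25, "9 mo": 0.19}

for delay, p in retained.items():
    per_min = p * n_items / minutes
    print(f"{delay}: ~{per_min:.1f} items per minute of movie")
# 3 h -> ~1.6 items/min; 3 mo -> ~0.7/min (roughly 1.4 per 2 min); 9 mo -> ~0.5/min (roughly 1.0 per 2 min)
```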

    Connectivity in the human brain dissociates entropy and complexity of auditory inputs

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and the complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators.
    Introduction
    Theoretical and experimental work in the fields of psychology and complexity science has arrived at two separate approaches for describing how stimuli may be encoded and what constitutes a complex stimulus. On one view, complexity can be reduced to uncertainty or entropy. The second, more recent view (e.g., Crutchfield, 2012) holds that simplicity/complexity depends on how demanding it is to model the underlying system that generated a particular stimulus or signal via the interactions of its states. From this perspective, there is a convex, inverse U-shaped relation between disorder and complexity, because highly ordered and highly disordered signals are typically generated by succinct, easily describable systems, whereas more sophisticated, or complex, systems generally convey intermediate levels of entropy. For instance, ABCDABCD can be thought of as generated by a system (e.g., a transition matrix) that transitions between four states deterministically (a simple explanation), while a random stimulus can be characterized by a system where all state transitions are equally likely (a similarly simple explanation). Note that in this latter approach, complexity does not capture how difficult it is to veridically encode or reproduce any specific stimulus or signal, but rather how computationally demanding it is to model the system or source generating that signal. As can be appreciated, the two views described above are independent.
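
    A small sketch of the intuition in the ABCDABCD example above, not taken from the paper: the entropy of first-order transitions is near zero for a strictly periodic sequence and maximal for a random one, yet both are generated by equally simple transition-matrix models. The sequences and alphabet are illustrative.

```python
import numpy as np
from collections import Counter

def transition_entropy(seq):
    """Average entropy in bits of P(next symbol | current symbol), weighted by symbol frequency."""
    pairs = Counter(zip(seq, seq[1:]))
    firsts = Counter(seq[:-1])
    h = 0.0
    for (a, _b), n in pairs.items():
        p = n / firsts[a]                                  # conditional transition probability
        h += (firsts[a] / (len(seq) - 1)) * (-p * np.log2(p))
    return h

rng = np.random.default_rng(0)
periodic = list("ABCD" * 250)                              # ABCDABCD...: deterministic generator
random_seq = list(rng.choice(list("ABCD"), size=1000))     # all transitions equally likely
print(transition_entropy(periodic))    # ~0 bits: very low entropy, simple generator
print(transition_entropy(random_seq))  # ~2 bits: maximal entropy, yet an equally simple generator
```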

    The Drosophila IKK-related kinase (Ik2) and Spindle-F proteins are part of a complex that regulates cytoskeleton organization during oogenesis

    Background: IkappaB kinases (IKKs) regulate the activity of Rel/NF-kappaB transcription factors by targeting their inhibitory partner proteins, IkappaBs, for degradation. The Drosophila genome encodes two members of the IKK family. Whereas the first is a kinase essential for activation of the NF-kappaB pathway, the second, Ik2, does not act as an IkappaB kinase. Instead, recent findings indicate that Ik2 regulates F-actin assembly by mediating the function of nonapoptotic caspases via degradation of DIAP1. It has also been suggested that ik2 regulates interactions between the minus ends of the microtubules and the actin-rich cortex in the oocyte. Since spn-F mutants display oocyte defects similar to those of the ik2 mutant, we decided to investigate whether Spn-F could be a direct regulatory target of Ik2.
    Results: We found that Ik2 binds physically to Spn-F; biomolecular interaction analysis of Spn-F and Ik2 demonstrated that the two proteins bind directly and form a complex. We showed that Ik2 phosphorylates Spn-F and that this phosphorylation does not lead to Spn-F degradation. Ik2 is localized to the anterior ring of the oocyte and to punctate structures in the nurse cells together with the Spn-F protein, and both proteins are mutually required for their localization.
    Conclusion: We conclude that Ik2 and Spn-F form a complex that regulates cytoskeleton organization during Drosophila oogenesis and in which Spn-F is the direct regulatory target of Ik2. Interestingly, Ik2 in this complex does not function as a typical IKK in that it does not direct Spn-F for degradation following phosphorylation.

    Brains of verbal memory specialists show anatomical differences in language, memory and visual systems

    We studied a group of verbal memory specialists to determine whether intensive oral text memory is associated with structural features of hippocampal and lateral-temporal regions implicated in language processing. Professional Vedic Sanskrit Pandits in India train from childhood for around 10 years in an ancient, formalized tradition of oral Sanskrit text memorization and recitation, mastering the exact pronunciation and invariant content of multiple 40,000–100,000 word oral texts. We conducted structural analysis of gray matter density, cortical thickness, local gyrification, and white matter structure, relative to matched controls. We found massive gray matter density and cortical thickness increases in Pandit brains in language, memory and visual systems, including (i) bilateral lateral temporal cortices and (ii) the anterior cingulate cortex and the hippocampus, regions associated with long- and short-term memory. Differences in hippocampal morphometry matched those previously documented for expert spatial navigators and individuals with good verbal working memory. The findings provide unique insight into the brain organization implementing formalized oral knowledge systems.

    Predictions as a window into learning: Anticipatory fixation offsets carry more information about environmental statistics than reactive stimulus-responses

    Published February 19, 2019. A core question underlying neurobiological and computational models of behavior is how individuals learn environmental statistics and use them to make predictions. Most investigations of this issue have relied on reactive paradigms, in which inferences about predictive processes are derived by modeling responses to stimuli that vary in likelihood. Here we deployed a novel anticipatory oculomotor metric to determine how input statistics impact anticipatory behavior that is decoupled from target-driven responses. We implemented transition constraints between target locations, so that the probability of a target being presented on the same side as on the previous trial was 70% in one condition (pret70) and 30% in the other (pret30). Rather than focus on responses to targets, we studied subtle endogenous anticipatory fixation offsets (AFOs) measured while participants fixated the screen center, awaiting a target. These AFOs were small (<0.4° from center on average), but strongly tracked global-level statistics. Speaking to learning dynamics, trial-by-trial fluctuations in AFO were well described by a learning model, which identified a lower learning rate in pret70 than in pret30, corroborating prior suggestions that pret70 is subjectively treated as more regular. Most importantly, direct comparisons with saccade latencies revealed that AFOs (a) reflected similar temporal integration windows, (b) carried more information about the statistical context than did saccade latencies, and (c) accounted for most of the information that saccade latencies also contained about input statistics. Our work demonstrates how strictly predictive processes reflect learning dynamics, and presents a new direction for studying learning and prediction.
    We thank Leonardo Chelazzi for his comments. UH's work was conducted in part while serving at and with support of the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. The study was partially funded by a European Research Council grant to UH (ERC-STG 263318).
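
    The learning model is not specified in this abstract, so the following is only an assumed delta-rule sketch of the kind of trial-by-trial estimate such a model tracks: the learner updates its estimate of the probability that the target repeats its previous side. The learning rate, return probability, and trial count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p_return, alpha, n_trials = 0.7, 0.1, 200  # hypothetical pret70-like condition and learning rate

p_hat = 0.5          # initial estimate of P(target returns to the same side)
estimates = []
for _ in range(n_trials):
    same_side = rng.random() < p_return    # did the target repeat its previous side?
    p_hat += alpha * (same_side - p_hat)   # delta-rule update toward the observed outcome
    estimates.append(p_hat)

print(f"estimate of P(return) after {n_trials} trials: {estimates[-1]:.2f}")  # hovers near 0.7
```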

    Lipreading a naturalistic narrative in a female population: Neural characteristics shared with listening and reading

    Introduction: Few of us are skilled lipreaders, while most of us struggle with the task. The neural substrates that enable comprehension of connected natural speech via lipreading are not yet well understood. Methods: We used a data-driven approach to identify brain areas underlying the lipreading of an 8-min narrative with participants whose lipreading skills varied extensively (range 6–100%, mean = 50.7%). The participants also listened to and read the same narrative. The similarity between individual participants' brain activity during the whole narrative, within and between conditions, was estimated by a voxel-wise comparison of the Blood Oxygenation Level Dependent (BOLD) signal time courses. Results: Inter-subject correlation (ISC) of the time courses revealed that lipreading, listening to, and reading the narrative were largely supported by the same brain areas in the temporal, parietal and frontal cortices, precuneus, and cerebellum. Additionally, listening to and reading connected naturalistic speech engaged higher-level linguistic processing in the parietal and frontal cortices more consistently than lipreading did, probably paralleling the limited understanding obtained via lipreading. Importantly, higher lipreading test scores and subjective estimates of comprehension of the lipread narrative were associated with activity in the superior and middle temporal cortex. Conclusions: Our new data illustrate that findings from prior studies using well-controlled repetitive speech stimuli and stimulus-driven data analyses are also valid for naturalistic connected speech. Our results might suggest an efficient use of brain areas dealing with phonological processing in skilled lipreaders.
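
    As an illustration of the ISC measure described in the Methods, here is a minimal sketch, not the authors' pipeline, of leave-one-out inter-subject correlation for a single voxel: each subject's BOLD time course is correlated with the mean time course of the remaining subjects. The toy data and the leave-one-out variant are assumptions for the example.

```python
import numpy as np

def voxel_isc(time_courses):
    """Leave-one-out ISC for one voxel; time_courses is an (n_subjects, n_timepoints) array."""
    n = time_courses.shape[0]
    iscs = []
    for s in range(n):
        others = np.delete(time_courses, s, axis=0).mean(axis=0)  # mean time course of the other subjects
        r = np.corrcoef(time_courses[s], others)[0, 1]            # Pearson correlation with subject s
        iscs.append(r)
    return np.mean(iscs)

# Toy data: 10 subjects, 240 timepoints of a shared signal plus subject-specific noise.
rng = np.random.default_rng(1)
shared = rng.standard_normal(240)
data = shared + 0.8 * rng.standard_normal((10, 240))
print(voxel_isc(data))  # well above 0 when activity is shared across subjects
```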